security game
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. Motivated by the practical problem of designing a security deployment strategy to protect targets from an adversary, the authors model and study this setting as a Stackelberg game. Their main result is that the defender can efficiently learn the adversary's payoffs by carefully deploying resources and observing the adversary's attacks. Clearly, this approach may not be viable when the cost the defender incurs on a successful attack is large (as in a terrorist attack), but it is plausibly reasonable in other settings, such as drug smuggling. Concretely, the paper gives a probably approximately optimal algorithm that finds a defender-optimal strategy after observing a number of attacks that is polynomial in the number of targets and the encoding length of the problem.
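The learning loop the review describes — the defender commits to a coverage strategy, a payoff-maximizing attacker best-responds, and the defender learns from which target was hit — can be sketched as follows. This is an illustrative sketch only; the function name and all payoff values are assumptions for the example, not taken from the paper under review.

```python
def attacker_best_response(coverage, u_uncovered, u_covered):
    """Given the defender's coverage probability for each target,
    return the target maximizing the attacker's expected payoff.

    coverage[t]    -- probability that target t is defended
    u_uncovered[t] -- attacker payoff if t is attacked while undefended
    u_covered[t]   -- attacker payoff if t is attacked while defended
    """
    def expected_payoff(t):
        return (1 - coverage[t]) * u_uncovered[t] + coverage[t] * u_covered[t]
    return max(range(len(coverage)), key=expected_payoff)

# Hypothetical instance: the defender observes which target a rational
# attacker hits under a chosen coverage vector; the paper's result is
# that polynomially many such observations suffice to learn the payoffs.
coverage    = [0.5, 0.2, 0.3]
u_uncovered = [4.0, 6.0, 3.0]
u_covered   = [-1.0, -2.0, 0.0]
print(attacker_best_response(coverage, u_uncovered, u_covered))  # → 1
```

Each observed best response rules out regions of payoff space inconsistent with the attack, which is what makes the polynomial sample bound possible.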
PoolFlip: A Multi-Agent Reinforcement Learning Security Environment for Cyber Defense
Cadet, Xavier, Boboila, Simona, Dharmawan, Sie Hendrata, Oprea, Alina, Chin, Peter
Cyber defense requires automating defensive decision-making under stealthy, deceptive, and continuously evolving adversarial strategies. The FlipIt game provides a foundational framework for modeling interactions between a defender and an advanced adversary that compromises a system without being immediately detected. In FlipIt, the attacker and defender compete to control a shared resource by performing a Flip action and paying a cost. However, existing FlipIt frameworks rely on a small number of heuristics or specialized learning techniques, which can lead to brittleness and an inability to adapt to new attacks. To address these limitations, we introduce PoolFlip, a multi-agent gym environment that extends the FlipIt game to allow efficient learning for attackers and defenders. Furthermore, we propose Flip-PSRO, a multi-agent reinforcement learning (MARL) approach that leverages population-based training to train defender agents equipped to generalize against a range of unknown, potentially adaptive opponents. Our empirical results suggest that Flip-PSRO defenders are $2\times$ more effective than baselines at generalizing to a heuristic attack not seen during training. In addition, our newly designed ownership-based utility functions ensure that Flip-PSRO defenders maintain a high level of control while optimizing performance.
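The core FlipIt dynamics the abstract describes — two players paying a cost to take control of a shared resource, scored by time in control minus flip costs — can be sketched in a few lines. This is a discrete-time toy version, not PoolFlip's actual environment or API; the function name, the simultaneous-flip tie-break, and the default cost are assumptions for illustration.

```python
def play_flipit(attacker_flips, defender_flips, horizon, flip_cost=1.0):
    """Score one FlipIt run. Each *_flips is a set of time steps at which
    that player pays flip_cost to (re)take the shared resource.
    Returns (attacker_score, defender_score): time in control minus costs.
    """
    owner = "defender"              # assume the defender starts in control
    control = {"attacker": 0, "defender": 0}
    for t in range(horizon):
        # Flips take effect at step t; when both flip at once we let the
        # defender's flip resolve last (an assumed tie-break, not
        # specified by the abstract).
        if t in attacker_flips:
            owner = "attacker"
        if t in defender_flips:
            owner = "defender"
        control[owner] += 1
    att = control["attacker"] - flip_cost * len(attacker_flips)
    dfn = control["defender"] - flip_cost * len(defender_flips)
    return att, dfn

# Attacker flips at t=3, defender recovers at t=6 over a 10-step horizon.
print(play_flipit({3}, {6}, horizon=10))  # → (2.0, 6.0)
```

The key feature of FlipIt — and what makes it a good testbed for stealthy compromise — is that neither player observes the other's flips, so each must trade off flip cost against the risk of having silently lost control.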